Journal of the Korea Information Science Society (A): Systems and Theory
Korean Title |
고정우선순위 선점가능 스케줄링 환경에서의 캐쉬에 의한 선점지연시간 분석 |
English Title |
Analysis of Cache-related Preemption Delay in Fixed-priority Preemptive Scheduling |
Author |
이창건
한주선
서양민
민상렬
하란
홍성수
박창윤
이민석
김종상
Chang-Gun Lee
Joosun Hahn
Yang-Min Seo
Sang Lyul Min
Rhan Ha
Seongsoo Hong
Chang Yun Park
Minsuk Lee
Chong Sang Kim
|
Citation |
Vol. 25, No. 3, pp. 243-256 (Mar. 1998) |
Korean Abstract |
As the speed gap between processors and memory grows, the need for cache memory has increased not only in general-purpose computer systems but also in real-time computer systems. While the use of cache memory has the positive effect of reducing the average execution time of tasks, it also has a negative side: in multitasking environments where preemption among tasks is allowed, it introduces into task execution times an additional cost (the cache-related preemption delay) that is difficult to predict. This negative side makes it hard for real-time computer systems, which must guarantee that tasks complete within given deadlines, to adopt cache memory. This paper proposes an analysis technique for more accurately predicting the cache-related preemption delay in a fixed-priority preemptive scheduling environment. Previous research on predicting the cache-related preemption delay … |
English Abstract |
Cache memory is increasingly being used in real-time computer systems as well as general-purpose computer systems due to the ever-increasing speed gap between processors and main memory.
However, the use of cache memory in real-time systems introduces variation into task execution times when preemptions are allowed among tasks. This paper proposes a technique for analyzing the cache-related preemption delays of tasks that cause such unpredictable variation in task execution time in the context of fixed-priority preemptive scheduling. Unlike previous approaches, which make the pessimistic assumption that each memory block of a preempting task displaces from the cache a memory block needed by a lower-priority task, the proposed technique considers the usefulness of cache blocks when computing the cache-related preemption delay, improving prediction accuracy. The proposed technique consists of two steps. The first step performs a per-task analysis to estimate the cache-related preemption cost for a given number of preemptions. The second step computes the worst-case response time of each task using a response time equation and a linear programming technique that takes as input the preemption cost information of tasks obtained in the first step. Our experimental results show that the proposed technique gives a prediction of the worst-case cache-related preemption delay that is up to 60% tighter than that obtained from previous approaches.
|
Keywords |
|